Sturm–Liouville Theory

In mathematics and its applications, classical Sturm–Liouville theory is the theory of ''real'' second-order ''linear'' ordinary differential equations of the form
\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + q(x)y = -\lambda w(x)y
for given coefficient functions p(x), q(x), and w(x), an unknown function ''y'' = ''y''(''x'') of the free variable x, and an unknown constant λ. All homogeneous (i.e. with the right-hand side equal to zero) second-order linear ordinary differential equations can be reduced to this form. In addition, the solution is typically required to satisfy some boundary conditions at extreme values of ''x''. Each such equation together with its boundary conditions constitutes a Sturm–Liouville problem.

In the simplest case, where all coefficients are continuous on the finite closed interval [a,b] and p has a continuous derivative, a function ''y'' = ''y''(''x'') is called a ''solution'' if it is continuously differentiable and satisfies the equation at every x\in(a,b). In the case of more general p(x), q(x), w(x), the solutions must be understood in a weak sense. The function w(x), sometimes denoted r(x), is called the ''weight'' or ''density'' function. The value of λ is not specified in the equation: finding the λ for which there exists a non-trivial solution is part of the given Sturm–Liouville problem. Such values of λ, when they exist, are called the ''eigenvalues'' of the problem, and the corresponding solutions are the ''eigenfunctions'' associated to each λ. This terminology is because the solutions correspond to the eigenvalues and eigenfunctions of a Hermitian differential operator in an appropriate Hilbert space of functions with inner product defined using the weight function. Sturm–Liouville theory studies the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions and their completeness in the function space.

This theory is important in applied mathematics, where Sturm–Liouville problems occur very frequently, particularly when dealing with separable linear partial differential equations. For example, in quantum mechanics, the one-dimensional time-independent Schrödinger equation is a Sturm–Liouville problem.

A Sturm–Liouville problem is said to be ''regular'' if p(x), w(x) > 0, and p(x), p'(x), q(x), w(x) are continuous functions over the finite interval [a,b], and the problem has separated boundary conditions of the form
\alpha_1 y(a) + \alpha_2 y'(a) = 0, \qquad \beta_1 y(b) + \beta_2 y'(b) = 0,
with \alpha_1, \alpha_2 not both zero and \beta_1, \beta_2 not both zero. The main result of Sturm–Liouville theory states that, for the regular Sturm–Liouville problem:
* The eigenvalues are real and can be numbered so that \lambda_1 < \lambda_2 < \lambda_3 < \cdots < \lambda_n < \cdots \to \infty;
* Corresponding to each eigenvalue \lambda_n is a unique (up to constant multiple) eigenfunction y_n(x) with exactly n-1 zeros in (a,b), called the ''n''th ''fundamental solution'';
* The normalized eigenfunctions form an orthonormal basis under the ''w''-weighted inner product in the Hilbert space L^2([a,b], w(x)\,dx). That is: \langle y_n,y_m\rangle = \int_a^b y_n(x)y_m(x)w(x)\,dx = \delta_{mn}, where \delta_{mn} is the Kronecker delta.

The theory is named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882).


Reduction to Sturm–Liouville form

The differential equation above is said to be in Sturm–Liouville form or self-adjoint form. All second-order linear ordinary differential equations can be recast in the form on its left-hand side by multiplying both sides of the equation by an appropriate integrating factor (although the same is not true of second-order partial differential equations, or if ''y'' is a vector). Some examples are below.


Bessel equation

x^2y'' + xy' + \left(x^2-\nu^2\right)y = 0 which can be written in Sturm–Liouville form (first by dividing through by x, then by collapsing the first two terms on the left into one term) as \left(xy'\right)' + \left(x - \frac{\nu^2}{x}\right)y = 0.
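As a quick sanity check (my own illustration, not part of the article), one can verify symbolically that the Sturm–Liouville form above expands back to the original Bessel equation after multiplying by x:

<syntaxhighlight lang="python">
# Verify the reduction of Bessel's equation to Sturm-Liouville form with SymPy.
import sympy as sp

x, nu = sp.symbols('x nu', positive=True)
y = sp.Function('y')

original = x**2*y(x).diff(x, 2) + x*y(x).diff(x) + (x**2 - nu**2)*y(x)
sl_form = sp.diff(x*y(x).diff(x), x) + (x - nu**2/x)*y(x)

# The two expressions agree after multiplying the S-L form by x.
assert sp.simplify(x*sl_form - original) == 0
print("Bessel S-L form verified")
</syntaxhighlight>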


Legendre equation

\left(1-x^2\right)y''-2xy'+\nu(\nu+1)y=0 which can easily be put into Sturm–Liouville form, since \frac{d}{dx}\left(1-x^2\right) = -2x, so the Legendre equation is equivalent to \left(\left(1-x^2\right)y'\right)' + \nu(\nu+1)y = 0


Example using an integrating factor

x^3y''-xy'+2y=0

Divide throughout by x^3: y'' - \frac{1}{x^2}y' + \frac{2}{x^3}y = 0. Multiplying throughout by an integrating factor of \mu(x) = \exp\left(\int -\frac{dx}{x^2}\right) = e^{1/x}, gives e^{1/x}y'' - \frac{e^{1/x}}{x^2}y' + \frac{2e^{1/x}}{x^3}y = 0 which can be easily put into Sturm–Liouville form since \frac{d}{dx}e^{1/x} = -\frac{e^{1/x}}{x^2} so the differential equation is equivalent to \left(e^{1/x}y'\right)' + \frac{2e^{1/x}}{x^3}y = 0.


Integrating factor for general second-order equation

P(x)y'' + Q(x)y' + R(x)y=0 Multiplying through by the integrating factor \mu(x) = \frac{1}{P(x)} \exp\left(\int \frac{Q(x)}{P(x)}\,dx\right), and then collecting gives the Sturm–Liouville form: \frac{d}{dx}\left(\mu(x)P(x)y'\right) + \mu(x)R(x)y = 0, or, explicitly: \frac{d}{dx}\left(\exp\left(\int \frac{Q(x)}{P(x)}\,dx\right)y'\right) + \frac{R(x)}{P(x)}\exp\left(\int \frac{Q(x)}{P(x)}\,dx\right)y = 0.
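The following SymPy sketch (an illustration of my own, using the previous example's coefficients as sample input) carries out this general reduction and checks that the resulting self-adjoint form is μ(x) times the original equation:

<syntaxhighlight lang="python">
# Reduce P y'' + Q y' + R y = 0 to Sturm-Liouville form via the integrating factor.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Sample coefficients (any smooth P != 0 works); here x^3 y'' - x y' + 2 y = 0.
P, Q, R = x**3, -x, sp.Integer(2)

mu = sp.exp(sp.integrate(Q/P, x))/P          # integrating factor mu = exp(Int Q/P)/P
p = sp.simplify(mu*P)                        # p = mu*P
q = sp.simplify(mu*R)                        # q = mu*R

sl_form = sp.diff(p*y(x).diff(x), x) + q*y(x)
original = P*y(x).diff(x, 2) + Q*y(x).diff(x) + R*y(x)

# The S-L form equals mu times the original equation.
assert sp.simplify(sl_form - mu*original) == 0
print("p(x) =", p, "  q(x) =", q)
</syntaxhighlight>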


Sturm–Liouville equations as self-adjoint differential operators

The mapping L defined by: Lu = \frac{1}{w(x)}\left(-\frac{d}{dx}\left[p(x)\,\frac{du}{dx}\right] + q(x)u\right) can be viewed as a linear operator mapping a function u to another function Lu, and it can be studied in the context of functional analysis. In fact, the Sturm–Liouville equation can be written as Lu = \lambda u. This is precisely the eigenvalue problem; that is, one seeks eigenvalues \lambda_1, \lambda_2, \lambda_3,\ldots and the corresponding eigenvectors of the operator L. The proper setting for this problem is the Hilbert space L^2([a,b], w(x)\,dx) with scalar product \langle f, g\rangle = \int_a^b \overline{f(x)}\, g(x)\, w(x)\,dx. In this space L is defined on sufficiently smooth functions which satisfy the above regular boundary conditions. Moreover, ''L'' is a self-adjoint operator: \langle L f, g \rangle = \langle f, L g \rangle. This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of L corresponding to different eigenvalues are orthogonal. However, this operator is unbounded and hence existence of an orthonormal basis of eigenfunctions is not evident. To overcome this problem, one looks at the resolvent \left(L - z\right)^{-1}, \qquad z \in \mathbb{R}, where z is not an eigenvalue. Then, computing the resolvent amounts to solving a nonhomogeneous equation, which can be done using the variation of parameters formula. This shows that the resolvent is an integral operator with a continuous symmetric kernel (the Green's function of the problem). As a consequence of the Arzelà–Ascoli theorem, this integral operator is compact, and existence of a sequence of eigenvalues \alpha_n which converge to 0 and eigenfunctions which form an orthonormal basis follows from the spectral theorem for compact operators. Finally, note that \left(L-z\right)^{-1} u = \alpha u \quad\text{and}\quad L u = \left(z+\alpha^{-1}\right) u are equivalent, so we may take \lambda = z+\alpha^{-1} with the same eigenfunctions.

If the interval is unbounded, or if the coefficients have singularities at the boundary points, one calls L singular. In this case, the spectrum no longer consists of eigenvalues alone and can contain a continuous component. There is still an associated eigenfunction expansion (similar to Fourier series versus Fourier transform). This is important in quantum mechanics, since the one-dimensional time-independent Schrödinger equation is a special case of a Sturm–Liouville equation.
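The self-adjointness identity \langle Lf, g\rangle = \langle f, Lg\rangle can be illustrated numerically. The sketch below is my own example (the coefficients p, q, w and the test functions are arbitrary choices, not taken from the article); it applies L by finite differences to two functions vanishing at both endpoints and compares the two weighted inner products:

<syntaxhighlight lang="python">
# Numerical illustration of <Lf, g> = <f, Lg> for L u = (-(p u')' + q u)/w.
import numpy as np

a, b, n = 0.0, 1.0, 4001
x = np.linspace(a, b, n)
dx = x[1] - x[0]

p = 1.0 + x**2            # sample coefficients with p > 0 and w > 0
q = np.cos(x)
w = 1.0 + 0.5*x

f = np.sin(np.pi*x)       # test functions vanishing at both endpoints
g = x*(1.0 - x)

def L(u):
    # Apply L u = (-(p u')' + q u) / w with simple finite differences.
    du = np.gradient(u, x)
    return (-np.gradient(p*du, x) + q*u)/w

def inner(u, v):
    # w-weighted inner product <u, v> over [a, b] (Riemann sum).
    return np.sum(u*v*w)*dx

# The two values agree up to discretization error.
print(inner(L(f), g), inner(f, L(g)))
</syntaxhighlight>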


Application to inhomogeneous second-order boundary value problems

Consider a general inhomogeneous second-order linear differential equation P(x)y'' + Q(x)y' + R(x)y = f for given functions P(x), Q(x), R(x), f(x). As before, this can be reduced to the Sturm–Liouville form Ly = f: writing a general Sturm–Liouville operator as: Lu = \frac{p}{w}u'' + \frac{p'}{w}u' + \frac{q}{w}u, one solves the system: p = Pw,\quad p' = Qw,\quad q = Rw. It suffices to solve the first two equations, which amounts to solving (Pw)' = Qw, or w' = \frac{Q-P'}{P}\,w := \alpha w. A solution is: w = \exp\left(\int\alpha \, dx\right), \quad p = P \exp\left(\int\alpha \, dx\right), \quad q = R \exp\left(\int\alpha \, dx\right). Given this transformation, one is left to solve: Ly = f.

In general, if initial conditions at some point are specified, for example y(a) and y'(a), a second-order differential equation can be solved using ordinary methods, and the Picard–Lindelöf theorem ensures that the differential equation has a unique solution in a neighbourhood of the point where the initial conditions have been specified. But if in place of specifying initial values at a ''single point'', it is desired to specify values at ''two'' different points (so-called boundary values), e.g. y(a) = 0 and y(b) = 1, the problem turns out to be much more difficult. Notice that by adding a suitable known differentiable function to y, whose values at a and b satisfy the desired boundary conditions, and injecting it inside the proposed differential equation, it can be assumed without loss of generality that the boundary conditions are of the form y(a) = 0 and y(b) = 0.

Here, Sturm–Liouville theory comes into play: indeed, a large class of functions f can be expanded in terms of a series of orthonormal eigenfunctions u_i of the associated Liouville operator with corresponding eigenvalues \lambda_i: f(x) = \sum_i \alpha_i u_i(x), \quad \alpha_i \in \mathbb{R}. Then a solution to the proposed equation is evidently: y = \sum_i \frac{\alpha_i}{\lambda_i} u_i. This solution will be valid only over the open interval a < x < b, and may fail at the boundaries.


Example: Fourier series

Consider the Sturm–Liouville problem: Lu = -\frac{d^2u}{dx^2} = \lambda u, where the unknowns are λ and u(x). For boundary conditions, we take for example: u(0) = u(\pi) = 0. Observe that if k is any integer, then the function u_k(x) = \sin kx is a solution with eigenvalue \lambda = k^2. We know that the solutions of a Sturm–Liouville problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis. Since orthogonal bases are always maximal (by definition), we conclude that the Sturm–Liouville problem in this case has no other eigenvectors.

Given the preceding, let us now solve the inhomogeneous problem L y = x, \qquad x\in(0,\pi) with the same boundary conditions y(0) = y(\pi) = 0. In this case, we must expand f(x) = x as a Fourier series. The reader may check, either by integrating or by consulting a table of Fourier transforms, that we thus obtain L y = \sum_{k=1}^\infty -2\frac{(-1)^k}{k} \sin kx. This particular Fourier series is troublesome because of its poor convergence properties. It is not clear ''a priori'' whether the series converges pointwise. Because of Fourier analysis, since the Fourier coefficients are "square-summable", the Fourier series converges in L^2, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability, and at jump points (the function x, considered as a periodic function, has a jump at \pi) converges to the average of the left and right limits (see convergence of Fourier series).

Therefore, by using the formula y = \sum_i \frac{\alpha_i}{\lambda_i}u_i above, we obtain the solution: y = \sum_{k=1}^\infty -2\frac{(-1)^k}{k^3}\sin kx = \tfrac{1}{6}\left(\pi^2 x - x^3\right). In this case, we could have found the answer using antidifferentiation, but this is no longer useful in most cases when the differential equation is in many variables.
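A short numerical check (my own illustration) confirms that the truncated eigenfunction series approaches the closed-form solution:

<syntaxhighlight lang="python">
# Check that the series solution of -y'' = x, y(0) = y(pi) = 0 matches (pi^2 x - x^3)/6.
import numpy as np

x = np.linspace(0.0, np.pi, 200)
N = 2000                                   # number of retained modes

k = np.arange(1, N + 1)
coeffs = -2.0*(-1.0)**k/k**3               # alpha_k / lambda_k with lambda_k = k^2
series = (coeffs[None, :]*np.sin(np.outer(x, k))).sum(axis=1)

exact = (np.pi**2*x - x**3)/6.0
print(np.max(np.abs(series - exact)))      # small: the truncated series converges to the exact solution
</syntaxhighlight>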


Application to partial differential equations


Normal modes

Certain partial differential equations can be solved with the help of Sturm–Liouville theory. Suppose we are interested in the vibrational modes of a thin membrane, held in a rectangular frame, 0 \le x \le L_1, 0 \le y \le L_2. The equation of motion for the vertical membrane displacement, W(x,y,t), is given by the wave equation: \frac{\partial^2 W}{\partial x^2}+\frac{\partial^2 W}{\partial y^2} = \frac{1}{c^2} \frac{\partial^2 W}{\partial t^2}. The method of separation of variables suggests looking first for solutions of the simple form W = X(x)\,Y(y)\,T(t). For such a function W the partial differential equation becomes \frac{X''}{X} + \frac{Y''}{Y} = \frac{1}{c^2}\frac{T''}{T}. Since the three terms of this equation are functions of x, y, t separately, they must be constants. For example, the first term gives X'' = -\lambda X for a constant λ. The boundary conditions ("held in a rectangular frame") are W = 0 when x = 0, L_1 or y = 0, L_2, and define the simplest possible Sturm–Liouville eigenvalue problems as in the example, yielding the "normal mode solutions" for W with harmonic time dependence, W_{mn}(x,y,t) = A_{mn} \sin\left(\frac{m\pi x}{L_1}\right) \sin\left(\frac{n\pi y}{L_2}\right)\cos\left(\omega_{mn}t\right) where m and n are non-zero integers, A_{mn} are arbitrary constants, and \omega^2_{mn} = c^2\pi^2 \left(\frac{m^2}{L_1^2}+\frac{n^2}{L_2^2}\right). The functions W_{mn} form a basis for the Hilbert space of (generalized) solutions of the wave equation; that is, an arbitrary solution W can be decomposed into a sum of these modes, which vibrate at their individual frequencies \omega_{mn}. This representation may require a convergent infinite sum.
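For concreteness, the lowest mode frequencies can be tabulated directly from the formula above; the wave speed and side lengths below are assumed values chosen only for the illustration:

<syntaxhighlight lang="python">
# Lowest normal-mode angular frequencies omega_mn of a rectangular membrane.
import numpy as np

c, L1, L2 = 1.0, 1.0, 2.0            # wave speed and side lengths (assumed)

modes = []
for m in range(1, 4):
    for n in range(1, 4):
        omega = c*np.pi*np.sqrt((m/L1)**2 + (n/L2)**2)
        modes.append((m, n, omega))

for m, n, omega in sorted(modes, key=lambda t: t[2]):
    print(f"mode ({m},{n}): omega = {omega:.4f}")
</syntaxhighlight>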


Second-order linear equation

For a linear second-order equation in one spatial dimension and first-order in time of the form: f(x) \frac{\partial^2 u}{\partial x^2} + g(x) \frac{\partial u}{\partial x} + h(x) u = \frac{\partial u}{\partial t} + k(t) u, \qquad u(a,t)=u(b,t)=0, \qquad u(x,0)=s(x).

Separating variables, we assume that u(x,t) = X(x) T(t). Then our above partial differential equation may be written as: \frac{\hat{L} X(x)}{X(x)} = \frac{\hat{M} T(t)}{T(t)} where \hat{L} = f(x) \frac{d^2}{dx^2} + g(x) \frac{d}{dx} + h(x), \qquad \hat{M} = \frac{d}{dt} + k(t). Since, by definition, \hat{L} and X(x) are independent of time t, and \hat{M} and T(t) are independent of position x, both sides of the above equation must be equal to a constant: \hat{L} X(x) =\lambda X(x),\qquad X(a)=X(b)=0,\qquad \hat{M} T(t) =\lambda T(t). The first of these equations must be solved as a Sturm–Liouville problem in terms of the eigenfunctions X_n(x) and eigenvalues \lambda_n. The second of these equations can be analytically solved once the eigenvalues are known: \frac{d}{dt} T_n (t)= \bigl(\lambda_n -k(t)\bigr) T_n (t), \qquad T_n (t) = a_n \exp \left(\lambda_n t -\int_0^t k(\tau) \, d\tau\right), \qquad u(x,t) =\sum_n a_n X_n (x) \exp \left(\lambda_n t -\int_0^t k(\tau) \, d\tau\right), \qquad a_n =\frac{\bigl\langle X_n(x), s(x)\bigr\rangle}{\bigl\langle X_n(x), X_n(x)\bigr\rangle}, where \bigl\langle y(x),z(x)\bigr\rangle = \int_a^b y(x) z(x) w(x) \, dx, \qquad w(x)= \frac{\exp\left(\int \frac{g(x)}{f(x)}\,dx\right)}{f(x)}.
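The recipe above is easy to run for the simplest special case f = 1, g = 0, h = 0, k = 0 on (0, π), i.e. the heat equation u_t = u_xx with Dirichlet conditions, where X_n(x) = sin(nx), λ_n = −n², and w(x) = 1. The sketch below is a minimal illustration of my own; the initial profile s(x) is an arbitrary assumed choice:

<syntaxhighlight lang="python">
# Eigenfunction-expansion solution of u_t = u_xx, u(0,t) = u(pi,t) = 0, u(x,0) = s(x).
import numpy as np

x = np.linspace(0.0, np.pi, 400)
s = x*(np.pi - x)               # assumed initial profile s(x)
N = 50                          # number of eigenfunctions retained

def u(t):
    total = np.zeros_like(x)
    for n in range(1, N + 1):
        Xn = np.sin(n*x)                          # X_n(x), with X_n(0) = X_n(pi) = 0
        a_n = np.sum(Xn*s)/np.sum(Xn*Xn)          # <X_n, s>/<X_n, X_n> with w = 1
        total += a_n*Xn*np.exp(-n**2*t)           # lambda_n = -n^2 and k(t) = 0 here
    return total

print(np.max(np.abs(u(0.0) - s)))   # at t = 0 the truncated series reproduces s(x) (up to truncation error)
print(u(0.5).max())                 # the profile decays for t > 0
</syntaxhighlight>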


Representation of solutions and numerical calculation

The Sturm–Liouville differential equation with boundary conditions may be solved analytically, which can be exact or provide an approximation, by the Rayleigh–Ritz method, or by the matrix-variational method of Gerck et al. Numerically, a variety of methods are also available. In difficult cases, one may need to carry out the intermediate calculations to several hundred decimal places of accuracy in order to obtain the eigenvalues correctly to a few decimal places.
* Shooting methods
* Finite difference method (a minimal sketch follows this list)
* Spectral parameter power series method
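As a minimal finite-difference sketch (my own example, assuming Dirichlet conditions and the model problem p = w = 1, q = 0 on (0, π), whose exact eigenvalues are 1, 4, 9, …), one can replace the second derivative by the standard three-point stencil and compute the matrix eigenvalues:

<syntaxhighlight lang="python">
# Finite-difference approximation of the eigenvalues of -y'' = lambda y, y(0) = y(pi) = 0.
import numpy as np

a, b, n = 0.0, np.pi, 500
h = (b - a)/n

# -y'' = lambda y  ->  (1/h^2) * tridiag(-1, 2, -1) acting on interior values
main = 2.0*np.ones(n - 1)/h**2
off = -1.0*np.ones(n - 2)/h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigvals = np.sort(np.linalg.eigvalsh(A))
print(eigvals[:5])            # approximately 1, 4, 9, 16, 25
</syntaxhighlight>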


Shooting methods

Shooting methods proceed by guessing a value of λ, solving an initial value problem defined by the boundary conditions at one endpoint, say a, of the interval [a,b], comparing the value this solution takes at the other endpoint b with the other desired boundary condition, and finally increasing or decreasing λ as necessary to correct the original value. This strategy is not applicable for locating complex eigenvalues.
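The following is a hedged sketch of this strategy for the model problem −y'' = λy, y(0) = y(π) = 0 (exact eigenvalues 1, 4, 9, …); it is an illustration of my own, not a general-purpose implementation. For a trial λ, one integrates the initial value problem y(0) = 0, y'(0) = 1 and reads off y(π); eigenvalues are the roots of that mismatch:

<syntaxhighlight lang="python">
# Shooting method for -y'' = lambda y, y(0) = y(pi) = 0.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def endpoint_value(lam):
    """Integrate from x = 0 with y(0) = 0, y'(0) = 1 and return y(pi)."""
    sol = solve_ivp(lambda x, Y: [Y[1], -lam*Y[0]],
                    (0.0, np.pi), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Bracket sign changes of y(pi; lambda) on a coarse grid, then refine each root.
grid = np.linspace(0.5, 17.0, 200)
values = [endpoint_value(lam) for lam in grid]
eigenvalues = [brentq(endpoint_value, grid[i], grid[i + 1])
               for i in range(len(grid) - 1)
               if values[i]*values[i + 1] < 0]
print(eigenvalues)    # approximately [1.0, 4.0, 9.0, 16.0]
</syntaxhighlight>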


Spectral parameter power series method

The spectral parameter power series (SPPS) method makes use of a generalization of the following fact about homogeneous second-order linear ordinary differential equations: if y is a solution of the Sturm–Liouville equation that does not vanish at any point of [a,b], then the function y(x) \int_a^x \frac{dt}{p(t)y(t)^2} is a solution of the same equation and is linearly independent from y. Further, all solutions are linear combinations of these two solutions. In the SPPS algorithm, one must begin with an arbitrary value \lambda_0^* (often \lambda_0^* = 0; it does not need to be an eigenvalue) and any solution y_0 of the equation with \lambda = \lambda_0^* which does not vanish on [a,b]. (Ways to find appropriate y_0 and \lambda_0^* are discussed below.) Two sequences of functions X^{(n)}(t), \tilde X^{(n)}(t) on [a,b], referred to as ''iterated integrals'', are defined recursively as follows. First when n = 0, they are taken to be identically equal to 1 on [a,b]. To obtain the next functions they are multiplied alternately by \frac{1}{py_0^2} and wy_0^2 and integrated: for n > 0, each X^{(n)} (respectively \tilde X^{(n)}) is obtained by integrating the previous function against one of these two alternating factors from a to x. The resulting iterated integrals are now applied as coefficients in the following two power series in λ: u_0 = y_0 \sum_{k=0}^\infty \left(\lambda-\lambda_0^*\right)^k \tilde X^{(2k)}, \qquad u_1 = y_0 \sum_{k=0}^\infty \left(\lambda-\lambda_0^*\right)^k X^{(2k+1)}. Then for any λ (real or complex), u_0 and u_1 are linearly independent solutions of the corresponding equation. (The functions p(x) and q(x) take part in this construction through their influence on the choice of y_0.)

Next one chooses coefficients c_0 and c_1 so that the combination y = c_0 u_0 + c_1 u_1 satisfies the first boundary condition. This is simple to do since X^{(n)}(a) = 0 and \tilde X^{(n)}(a) = 0, for n > 0. The values of X^{(n)}(b) and \tilde X^{(n)}(b) provide the values of u_0(b) and u_1(b) and the derivatives u_0'(b) and u_1'(b), so the second boundary condition becomes an equation in a power series in λ. For numerical work one may truncate this series to a finite number of terms, producing a calculable polynomial in λ whose roots are approximations of the sought-after eigenvalues.

When \lambda = \lambda_0, this reduces to the original construction described above for a solution linearly independent to a given one. The representations of u_0 and u_1 also have theoretical applications in Sturm–Liouville theory.


Construction of a nonvanishing solution

The SPPS method can, itself, be used to find a starting solution y_0. Consider the equation (py')' = 0; i.e., q, w, and λ are replaced in the Sturm–Liouville equation by 0, 1, and 0, respectively. Then the constant function 1 is a nonvanishing solution corresponding to the eigenvalue \lambda_0 = 0. While there is no guarantee that u_0 or u_1 will not vanish, the complex function y_0 = u_0 + iu_1 will never vanish, because two linearly independent solutions of a regular Sturm–Liouville equation cannot vanish simultaneously as a consequence of the Sturm separation theorem. This trick gives a solution y_0 of the Sturm–Liouville equation for the value \lambda_0 = 0. In practice, if the equation has real coefficients, the solutions based on y_0 will have very small imaginary parts which must be discarded.


See also

* Normal mode
* Oscillation theory
* Self-adjoint
* Variation of parameters
* Spectral theory of ordinary differential equations
* Atkinson–Mingarelli theorem

